
    Deep Quantigraphic Image Enhancement via Comparametric Equations

    Most recent methods of deep image enhancement can be broadly classified into two types: decompose-and-enhance and illumination estimation-centric. The former is usually less efficient, and the latter is constrained by a strong assumption that the image reflectance is the desired enhancement result. To alleviate this constraint while retaining high efficiency, we propose a novel trainable module that diversifies the conversion from the low-light image and illumination map to the enhanced image. It formulates image enhancement as a comparametric equation parameterized by a camera response function and an exposure compensation ratio. By incorporating this module into an illumination estimation-centric DNN, our method improves the flexibility of deep image enhancement, limits the computational burden to illumination estimation, and allows for fully unsupervised learning adaptable to the diverse demands of different tasks. Comment: Published in ICASSP 2023. For GitHub code, see https://github.com/nttcslab/con
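    The comparametric relation the abstract describes maps a low-light pixel to its exposure-compensated counterpart as g(x) = f(k · f⁻¹(x)), where f is the camera response function (CRF) and k is the exposure compensation ratio. A minimal sketch, assuming a simple gamma curve as a stand-in CRF (the paper's learned CRF and function names here are illustrative, not from the source):

    ```python
    import numpy as np

    def crf(x, gamma=2.2):
        """Gamma curve as a stand-in camera response function (CRF)."""
        return np.clip(x, 0.0, 1.0) ** (1.0 / gamma)

    def inverse_crf(y, gamma=2.2):
        """Inverse CRF: recovers (relative) scene irradiance from pixel values."""
        return np.clip(y, 0.0, 1.0) ** gamma

    def comparametric_enhance(low_img, k, gamma=2.2):
        """Comparametric mapping g(x) = f(k * f^{-1}(x)): the enhanced image is
        the low-light image re-exposed by ratio k under the same CRF."""
        irradiance = inverse_crf(low_img, gamma)
        return crf(k * irradiance, gamma)

    low = np.array([0.1, 0.3, 0.5])          # toy low-light pixel values in [0, 1]
    bright = comparametric_enhance(low, k=4.0)
    ```

    For a pure gamma CRF this reduces to scaling by k^(1/γ); a learned or measured CRF would give a more interesting tone mapping, with k supplied per pixel by the illumination-estimation network.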

    Exploring Feature Representation Learning for Semi-supervised Medical Image Segmentation

    This paper presents a simple yet effective two-stage framework for semi-supervised medical image segmentation. Our key insight is to explore feature representation learning with labeled and unlabeled (i.e., pseudo-labeled) images to enhance segmentation performance. In the first stage, we present an aleatoric uncertainty-aware method, namely AUA, to improve segmentation performance for generating high-quality pseudo labels. Considering the inherent ambiguity of medical images, AUA adaptively regularizes consistency on images with low ambiguity. To enhance representation learning, we propose a stage-adaptive contrastive learning method, including a boundary-aware contrastive loss to regularize the labeled images in the first stage and a prototype-aware contrastive loss to optimize both labeled and pseudo-labeled images in the second stage. The boundary-aware contrastive loss optimizes only pixels around the segmentation boundaries to reduce computational cost. The prototype-aware contrastive loss fully leverages both labeled and pseudo-labeled images by building a centroid for each class, reducing the computational cost of pair-wise comparison. Our method achieves the best results on two public medical image segmentation benchmarks. Notably, our method outperforms the prior state-of-the-art by 5.7% on Dice for colon tumor segmentation relying on just 5% labeled images. Comment: On submission to TM
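    The prototype-aware idea, averaging each class's features into one centroid and contrasting pixels against centroids instead of against all other pixels, can be sketched as follows. This is a generic InfoNCE-style formulation under assumed shapes (N feature vectors, C classes), not the paper's exact loss:

    ```python
    import numpy as np

    def class_prototypes(features, labels, num_classes):
        """Average the feature vectors of each class into one prototype (centroid)."""
        protos = np.zeros((num_classes, features.shape[1]))
        for c in range(num_classes):
            protos[c] = features[labels == c].mean(axis=0)
        return protos

    def prototype_contrastive_loss(features, labels, protos, tau=0.1):
        """InfoNCE-style loss: each feature should be most similar (cosine) to its
        own class prototype rather than to any other class's prototype."""
        f = features / np.linalg.norm(features, axis=1, keepdims=True)
        p = protos / np.linalg.norm(protos, axis=1, keepdims=True)
        logits = f @ p.T / tau                        # (N, C) scaled similarities
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -log_prob[np.arange(len(labels)), labels].mean()

    # Toy example: two well-separated clusters.
    feats = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
    labels = np.array([0, 0, 1, 1])
    protos = class_prototypes(feats, labels, num_classes=2)
    loss = prototype_contrastive_loss(feats, labels, protos)
    ```

    The computational saving is that each of the N pixels is compared against C prototypes rather than against O(N) other pixels.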

    Numerical Simulation and Experimental Study of Deep Bed Corn Drying Based on Water Potential

    The concept and model of water potential, widely used in agriculture, have proved beneficial in vacuum drying models and have offered a new way to explore grain drying models since being introduced to the grain drying and storage fields. Traditional deep bed drying models have a narrow application range and do not apply to systems in which pressure is an influential factor, such as vacuum drying systems. To overcome these shortcomings, this study combined a deep bed drying model with the water potential drying model and established a numerical simulation system for the deep bed corn drying process. The system proved effective according to the results of numerical simulation and the corresponding experimental investigation, and revealed that desorption and adsorption coexist in deep bed drying.

    Angle-Aware and Tone-Aware Luminosity Analysis for Paper Model Surface

    Luminosity contributes to the perception of a paper model surface; it has a significant impact on the perception of colour and detail. The main purpose of this paper is to study the reflected luminosity of paper model surfaces, which may have complex or difficult shapes. The final perceived quality of a product, whether plain, 3D, or another shape, depends on the surface luminosity perceived by the receptor, such as eyes or measurement instruments. However, the parameters and constraints of a paper model surface are numerous, and selecting every parameter by trial and error is time-consuming. For a paper surface under a fixed lighting environment, the most important factors determining perception are typically the viewing angle and the surface tone. Therefore, the two related terms, perception angle and surface tone, were chosen for the analysis. The final analysis, based on the initial conditions, made it possible to predict the perception of the paper model surface and to set the optimal perceived angles and tones. It also proposed, as a next step, modeling the perception of paper model surfaces of different shapes within a relatively short period.

    Luminance Prediction of Paper Model Surface Based on Non-Contact Measurement

    Overall appearance perception is affected mostly by the accuracy and efficiency of luminance perception. Surface luminance prediction, correlated with surface angle and surface tone value, was performed by measuring and modeling the luminance of a paper model surface. First, we used a rotating bracket designed to make setting the paper surface angle easy. Then, we set the surface angle from 5° to 85° at intervals of 5° using the designed rotating bracket. Additionally, scales of the four primary colors, cyan, magenta, yellow, and black, were printed and set at the designed angles. The angle-aware and tone-aware luminance was measured using a spectroradiometer (CS-2000). Finally, we proposed and evaluated a mathematical model, fitted by the least squares method, to reveal the relationship between luminance, surface angle, and surface tone. The results indicated that the proposed prediction model can predict the surface luminance of the paper model quickly and accurately for any surface angle and surface tone value.
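    The abstract does not give the exact functional form of the luminance model, so as an illustration only, here is a least-squares fit of a bilinear surface L(a, t) = c0 + c1·a + c2·t + c3·a·t over the stated angle grid (5° to 85° in 5° steps); the measurement values below are synthetic, not the paper's data:

    ```python
    import numpy as np

    # Angle grid from the abstract; tone values and luminance are synthetic.
    angles = np.arange(5.0, 90.0, 5.0)                  # degrees, 5..85 step 5
    tones = np.array([10.0, 30.0, 50.0, 70.0, 90.0])    # tone value (%)
    A, T = np.meshgrid(angles, tones)
    lum = 120 - 0.8 * A - 0.6 * T + 0.004 * A * T       # synthetic "measurements"

    # Design matrix for L(a, t) = c0 + c1*a + c2*t + c3*a*t, solved by least squares.
    a, t, y = A.ravel(), T.ravel(), lum.ravel()
    X = np.column_stack([np.ones_like(a), a, t, a * t])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)

    pred = X @ coef   # model luminance at every measured (angle, tone) pair
    ```

    Because the synthetic data were generated from the same bilinear form, the fit recovers the coefficients exactly; with real spectroradiometer readings the residuals would quantify how well this assumed form captures the angle-tone dependence.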